
Conversation

@virajjasani (Contributor) commented Sep 10, 2019

for (ServerMetrics serverMetrics : serverMetricsMap.values()) {
Map<byte[], RegionMetrics> regionMetricsMap = serverMetrics.getRegionMetrics();
for (RegionMetrics regionMetrics : regionMetricsMap.values()) {
final int regionStoreRefCount = regionMetrics.getStoreRefCount();
Contributor:

This count is the sum of the ref counts of all HFiles under each CF in this region, right? So the sum is not just a factor of many refs on one file; it is also a function of the number of HFiles under the region. What if the region has many files at a time, each with, say, fewer than 10 active refs? Then 250 is obviously not a correct trigger IMO. Is there a way we can see refs per HFile? No need to know the ref count of every HFile; the max may be enough. I believe this decision should be made based on many ref counts on a single file rather than the sum. Thoughts?
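The sum-versus-max distinction being drawn here can be illustrated in plain Java (a minimal sketch with made-up ref counts, not the HBase API): a region with many HFiles, none of them leaked, can still push the cumulative count past a 250-style threshold, while the per-file max stays in single digits.

```java
import java.util.stream.IntStream;

public class RefCountExample {
    // Cumulative ref count across all HFiles of a region (what a sum-based metric reports).
    static int sum(int[] perFileRefCounts) {
        return IntStream.of(perFileRefCounts).sum();
    }

    // Worst single file: the per-HFile max the comment asks for instead.
    static int max(int[] perFileRefCounts) {
        return IntStream.of(perFileRefCounts).max().orElse(0);
    }

    public static void main(String[] args) {
        // 30 files, each with fewer than 10 active refs: no leak on any one file.
        int[] refCounts = new int[30];
        java.util.Arrays.fill(refCounts, 9);
        refCounts[3] = 7;
        System.out.println("sum=" + sum(refCounts)); // sum alone crosses a 250-style threshold
        System.out.println("max=" + max(refCounts)); // per-file max stays in single digits
    }
}
```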


private List<HRegionLocation> prepareRegionsForReopen(MasterProcedureEnv env) {
List<HRegionLocation> regionsToReopenList = new ArrayList<>();
List<HRegionLocation> tableRegionsForReopen = env.getAssignmentManager()
Contributor:

Better to pass this List to this private method than passing env.

}

private List<HRegionLocation> prepareRegionsForReopen(MasterProcedureEnv env) {
List<HRegionLocation> regionsToReopenList = new ArrayList<>();
Contributor:

Can we avoid terms like List/Map in the variable name?

@Apache-HBase

🎊 +1 overall

Vote Subsystem Runtime Comment
💙 reexec 0m 37s Docker mode activated.
_ Prechecks _
💚 dupname 0m 0s No case conflicting files found.
💙 prototool 0m 0s prototool was not available.
💚 hbaseanti 0m 1s Patch does not have any anti-patterns.
💚 @author 0m 0s The patch does not contain any @author tags.
💚 test4tests 0m 0s The patch appears to include 3 new or modified test files.
_ master Compile Tests _
💙 mvndep 1m 3s Maven dependency ordering for branch
💚 mvninstall 5m 27s master passed
💚 compile 3m 4s master passed
💚 checkstyle 2m 40s master passed
💙 refguide 5m 38s branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.
💚 shadedjars 4m 54s branch has no errors when building our shaded downstream artifacts.
💚 javadoc 5m 15s master passed
💙 spotbugs 4m 14s Used deprecated FindBugs config; considering switching to SpotBugs.
💚 findbugs 28m 6s master passed
_ Patch Compile Tests _
💙 mvndep 0m 15s Maven dependency ordering for patch
💚 mvninstall 6m 24s the patch passed
💚 compile 3m 48s the patch passed
💚 cc 3m 48s the patch passed
💚 javac 3m 48s the patch passed
💚 checkstyle 3m 14s the patch passed
💚 whitespace 0m 0s The patch has no whitespace issues.
💚 xml 0m 1s The patch has no ill-formed XML file.
💙 refguide 7m 14s patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.
💚 shadedjars 6m 4s patch has no errors when building our shaded downstream artifacts.
💚 hadoopcheck 20m 8s Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
💚 hbaseprotoc 10m 39s the patch passed
💚 javadoc 6m 9s the patch passed
💚 findbugs 30m 41s the patch passed
_ Other Tests _
💚 unit 250m 39s root in the patch passed.
💚 asflicense 5m 19s The patch does not generate ASF License warnings.
417m 25s
Subsystem Report/Notes
Docker Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/2/artifact/out/Dockerfile
GITHUB PR #600
Optional Tests dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile refguide xml cc hbaseprotoc prototool
uname Linux ac7bbf7c4a4f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 GNU/Linux
Build tool maven
Personality /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-600/out/precommit/personality/provided.sh
git revision master / cb62f73
Default Java 1.8.0_181
refguide https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/2/artifact/out/branch-site/book.html
refguide https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/2/artifact/out/patch-site/book.html
Test Results https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/2/testReport/
Max. process+thread count 5458 (vs. ulimit of 10000)
modules C: hbase-protocol-shaded hbase-common hbase-hadoop-compat hbase-hadoop2-compat hbase-protocol hbase-client hbase-server . U: .
Console output https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/2/console
versions git=2.11.0 maven=2018-06-17T18:33:14Z findbugs=3.1.11
Powered by Apache Yetus 0.11.0 https://yetus.apache.org

This message was automatically generated.

int getStoreRefCount();

/**
* @return the max reference count for any store among all stores of this region
Contributor:

Is this the max ref count on a Store or on a StoreFile? The latter, right? Please change the javadoc and the method name accordingly; that will make the name very clear.

Contributor:

And the name should be corrected accordingly in all related places.

storeFileCount += r.getStoreFileCount();
int currentStoreRefCount = r.getStoreRefCount();
storeRefCount += currentStoreRefCount;
maxStoreRefCount = Math.max(maxStoreRefCount, currentStoreRefCount);
Contributor:

RegionMetrics should give maxStoreFileRefCount also and that should be considered here.

<name>hbase.regions.recovery.store.count</name>
<value>256</value>
<description>
Store Ref Count threshold value considered
Contributor:

Please correct this description accordingly.


private static final String REGIONS_RECOVERY_INTERVAL =
"hbase.master.regions.recovery.interval";
private static final String STORE_REF_COUNT_THRESHOLD = "hbase.regions.recovery.store.count";
Contributor:

This is the store file ref count used for recovery. Please change the config name in such a way as to indicate this.

"Error reopening regions with high storeRefCount. ";

private final HMaster hMaster;
private final int storeRefCountThreshold;
Contributor:

This var name too

@Override
protected void chore() {
if (LOG.isTraceEnabled()) {
LOG.trace("Starting up Regions Recovery by reopening regions based on storeRefCount...");
Contributor:

Starting up Regions Recovery chore for reopening .......

LOG.error("Error while reopening regions based on storeRefCount threshold", e);
}
if (LOG.isTraceEnabled()) {
LOG.trace("Exiting Regions Recovery by reopening regions based on storeRefCount...");
Contributor:

Log to be corrected.

// is beyond a threshold value, we should reopen the region.
// Here, we take max ref count of all stores and not the cumulative count
// of all stores.
final int maxStoreRefCount = regionMetrics.getMaxStoreRefCount();
Contributor:

In all places, maxStoreFileRefCount please.

// For each region, each store file can have different ref counts
// We need to find maximum of all such ref counts and if that max count
// is beyond a threshold value, we should reopen the region.
// Here, we take max ref count of all stores and not the cumulative count
Contributor:

Max count on a store or on a store file? Please make it consistent. The line above says store file, and that is what is ideally needed too. I doubt whether we are doing that; I think what we are actually doing is passing the max store ref count (the sum of the ref counts of all store files under that store).

// Here, we take max ref count of all stores and not the cumulative count
// of all stores.
final int maxStoreRefCount = regionMetrics.getMaxStoreRefCount();
if (maxStoreRefCount > storeRefCountThreshold) {
Contributor:

Should we treat system tables differently? The META table might sometimes get a very large ref count; it is possible, for example when many new clients are started/restarted, or when many regions move/split, etc. What if it had a huge ref count at the time of reporting? Reopening META would affect the whole system! Anyway, the default of 256 (as of now) is way too low for any kind of region IMO.

@virajjasani (Author) commented Sep 19, 2019:

Sure, better to exclude META.
Updated the PR with all the suggestions; started using the store file's max refCount.

@Apache-HBase

💔 -1 overall

Vote Subsystem Runtime Comment
💙 reexec 1m 18s Docker mode activated.
_ Prechecks _
💚 dupname 0m 0s No case conflicting files found.
💙 prototool 0m 0s prototool was not available.
💚 hbaseanti 0m 0s Patch does not have any anti-patterns.
💚 @author 0m 0s The patch does not contain any @author tags.
💚 test4tests 0m 0s The patch appears to include 3 new or modified test files.
_ master Compile Tests _
💙 mvndep 0m 27s Maven dependency ordering for branch
💚 mvninstall 5m 40s master passed
💚 compile 3m 14s master passed
💚 checkstyle 2m 55s master passed
💙 refguide 6m 10s branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.
💚 shadedjars 5m 0s branch has no errors when building our shaded downstream artifacts.
💚 javadoc 4m 56s master passed
💙 spotbugs 14m 53s Used deprecated FindBugs config; considering switching to SpotBugs.
💚 findbugs 26m 4s master passed
_ Patch Compile Tests _
💙 mvndep 0m 13s Maven dependency ordering for patch
💚 mvninstall 5m 29s the patch passed
💚 compile 3m 25s the patch passed
💚 cc 3m 25s the patch passed
💚 javac 3m 25s the patch passed
💚 checkstyle 2m 53s the patch passed
💚 whitespace 0m 0s The patch has no whitespace issues.
💚 xml 0m 1s The patch has no ill-formed XML file.
💙 refguide 6m 23s patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.
💚 shadedjars 5m 19s patch has no errors when building our shaded downstream artifacts.
💚 hadoopcheck 18m 6s Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
💚 hbaseprotoc 9m 57s the patch passed
💚 javadoc 5m 36s the patch passed
💚 findbugs 29m 34s the patch passed
_ Other Tests _
💔 unit 262m 27s root in the patch failed.
💚 asflicense 3m 3s The patch does not generate ASF License warnings.
416m 38s
Reason Tests
Failed junit tests hadoop.hbase.client.TestSnapshotTemporaryDirectory
hadoop.hbase.tool.TestSecureBulkLoadHFilesSplitRecovery
hadoop.hbase.tool.TestBulkLoadHFilesSplitRecovery
Subsystem Report/Notes
Docker Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/4/artifact/out/Dockerfile
GITHUB PR #600
Optional Tests dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile refguide xml cc hbaseprotoc prototool
uname Linux 1491451188f2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 GNU/Linux
Build tool maven
Personality /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-600/out/precommit/personality/provided.sh
git revision master / cb04c6c
Default Java 1.8.0_181
refguide https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/4/artifact/out/branch-site/book.html
refguide https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/4/artifact/out/patch-site/book.html
unit https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/4/artifact/out/patch-unit-root.txt
Test Results https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/4/testReport/
Max. process+thread count 5179 (vs. ulimit of 10000)
modules C: hbase-protocol-shaded hbase-common hbase-hadoop-compat hbase-hadoop2-compat hbase-protocol hbase-client hbase-server . U: .
Console output https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/4/console
versions git=2.11.0 maven=2018-06-17T18:33:14Z findbugs=3.1.11
Powered by Apache Yetus 0.11.0 https://yetus.apache.org

This message was automatically generated.

storeCount += r.getStoreCount();
storeFileCount += r.getStoreFileCount();
storeRefCount += r.getStoreRefCount();
maxStoreFileRefCount += r.getMaxStoreFileRefCount();
Contributor:

You should do max() here.

@virajjasani (Author) commented Sep 24, 2019:

This is the sum of all counts for toString(), right? I thought of doing max(), but I feel these counts should be the sum across all regionMetrics for the sake of toString() only. Still, I agree it is better to take the max of all storeFileRefCount values.
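The fix agreed on here (keep `+=` for cumulative counts, but use `Math.max` for a per-file maximum) can be sketched with a stand-in stats class; the field names are illustrative, not the real RegionMetrics API.

```java
public class MetricsAggregation {
    // Minimal stand-in for per-region metrics (illustrative field names only).
    static final class RegionStats {
        final int storeRefCount;
        final int maxStoreFileRefCount;
        RegionStats(int storeRefCount, int maxStoreFileRefCount) {
            this.storeRefCount = storeRefCount;
            this.maxStoreFileRefCount = maxStoreFileRefCount;
        }
    }

    // Returns { summed storeRefCount, overall maxStoreFileRefCount }.
    static int[] aggregate(RegionStats[] regions) {
        int storeRefCount = 0;
        int maxStoreFileRefCount = 0;
        for (RegionStats r : regions) {
            storeRefCount += r.storeRefCount;          // sums are additive across regions
            maxStoreFileRefCount = Math.max(           // but a maximum must use max(),
                maxStoreFileRefCount, r.maxStoreFileRefCount); // not +=
        }
        return new int[] { storeRefCount, maxStoreFileRefCount };
    }

    public static void main(String[] args) {
        RegionStats[] regions = {
            new RegionStats(10, 4), new RegionStats(20, 9), new RegionStats(5, 2)
        };
        int[] totals = aggregate(regions);
        System.out.println("storeRefCount=" + totals[0]
            + " maxStoreFileRefCount=" + totals[1]); // sum is 35, max is 9 (not 15)
    }
}
```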

<description>
Regions Recovery Chore interval in milliseconds.
This chore keeps running at this interval to
find all regions with high store ref count and
Contributor:

Say "above a configurable max value".

</description>
</property>
<property>
<name>hbase.master.regions.recovery.interval</name>
Contributor:

Name it like hbase.master.regions.recovery.check.interval (?) Is that a better name?

int maxStoreFileRefCount = 0;
Collection<HStoreFile> hStoreFiles = this.storeEngine.getStoreFileManager()
.getStorefiles();
for (HStoreFile storeFile : hStoreFiles) {
Contributor:

return this.storeEngine.getStoreFileManager().getStorefiles().stream()
.filter(sf -> sf.getReader() != null).filter(HStoreFile::isHFile)
.mapToInt(HStoreFile::getRefCount).max()
??

Contributor Author:

Yes, the same thing but with streams, so better.
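The stream version agreed on above can be fleshed out into a complete method. Here is a self-contained sketch with a stubbed StoreFile; the getReader()/isHFile()/getRefCount() names mirror the review snippet, but the stub (and the orElse(0) default for a store with no eligible files) is an assumption, not the real HStoreFile API.

```java
import java.util.Arrays;
import java.util.Collection;

public class MaxRefCountSketch {
    // Stub mirroring only the methods used in the review snippet.
    static final class StoreFile {
        final Object reader;   // null when the file has no open reader
        final boolean hfile;
        final int refCount;
        StoreFile(Object reader, boolean hfile, int refCount) {
            this.reader = reader; this.hfile = hfile; this.refCount = refCount;
        }
        Object getReader() { return reader; }
        boolean isHFile() { return hfile; }
        int getRefCount() { return refCount; }
    }

    static int getMaxStoreFileRefCount(Collection<StoreFile> storeFiles) {
        return storeFiles.stream()
            .filter(sf -> sf.getReader() != null) // skip files with no open reader
            .filter(StoreFile::isHFile)           // skip non-HFile entries
            .mapToInt(StoreFile::getRefCount)
            .max()
            .orElse(0);                           // 0 when no eligible store files
    }

    public static void main(String[] args) {
        Collection<StoreFile> files = Arrays.asList(
            new StoreFile(new Object(), true, 3),
            new StoreFile(null, true, 99),          // ignored: no reader
            new StoreFile(new Object(), false, 50), // ignored: not an HFile
            new StoreFile(new Object(), true, 7));
        System.out.println(getMaxStoreFileRefCount(files)); // prints 7
    }
}
```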

</property>
<property>
<name>hbase.regions.recovery.store.file.count</name>
<value>128</value>
Contributor:

Maybe a default of 1000 or so is better.
Yeah, to be on the safe side, leave the META region out of this for now?

@virajjasani virajjasani force-pushed the HBASE-22460-master branch 3 times, most recently from f58691f to 86a1644 Compare September 25, 2019 20:08
@Apache-HBase

💔 -1 overall

Vote Subsystem Runtime Comment
💙 reexec 0m 30s Docker mode activated.
_ Prechecks _
💚 dupname 0m 0s No case conflicting files found.
💙 prototool 0m 0s prototool was not available.
💚 hbaseanti 0m 0s Patch does not have any anti-patterns.
💚 @author 0m 0s The patch does not contain any @author tags.
💚 test4tests 0m 0s The patch appears to include 3 new or modified test files.
_ master Compile Tests _
💙 mvndep 0m 35s Maven dependency ordering for branch
💚 mvninstall 5m 45s master passed
💚 compile 3m 20s master passed
💚 checkstyle 3m 10s master passed
💙 refguide 6m 42s branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.
💚 shadedjars 5m 20s branch has no errors when building our shaded downstream artifacts.
💚 javadoc 5m 10s master passed
💙 spotbugs 3m 55s Used deprecated FindBugs config; considering switching to SpotBugs.
💚 findbugs 26m 6s master passed
_ Patch Compile Tests _
💙 mvndep 0m 14s Maven dependency ordering for patch
💚 mvninstall 5m 24s the patch passed
💚 compile 3m 16s the patch passed
💚 cc 3m 16s the patch passed
💚 javac 3m 16s the patch passed
💚 checkstyle 3m 1s the patch passed
💚 whitespace 0m 0s The patch has no whitespace issues.
💚 xml 0m 1s The patch has no ill-formed XML file.
💙 refguide 6m 11s patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.
💚 shadedjars 5m 1s patch has no errors when building our shaded downstream artifacts.
💚 hadoopcheck 16m 59s Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
💚 hbaseprotoc 9m 12s the patch passed
💚 javadoc 4m 58s the patch passed
💚 findbugs 27m 5s the patch passed
_ Other Tests _
💔 unit 207m 59s root in the patch failed.
💚 asflicense 3m 41s The patch does not generate ASF License warnings.
358m 41s
Reason Tests
Failed junit tests hadoop.hbase.snapshot.TestExportSnapshotNoCluster
Subsystem Report/Notes
Docker Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/8/artifact/out/Dockerfile
GITHUB PR #600
Optional Tests dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile refguide xml cc hbaseprotoc prototool
uname Linux d61acd0cdae3 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 GNU/Linux
Build tool maven
Personality /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-600/out/precommit/personality/provided.sh
git revision master / 52f5a85
Default Java 1.8.0_181
refguide https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/8/artifact/out/branch-site/book.html
refguide https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/8/artifact/out/patch-site/book.html
unit https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/8/artifact/out/patch-unit-root.txt
Test Results https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/8/testReport/
Max. process+thread count 5091 (vs. ulimit of 10000)
modules C: hbase-protocol-shaded hbase-common hbase-hadoop-compat hbase-hadoop2-compat hbase-protocol hbase-client hbase-server . U: .
Console output https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/8/console
versions git=2.11.0 maven=2018-06-17T18:33:14Z findbugs=3.1.11
Powered by Apache Yetus 0.11.0 https://yetus.apache.org

This message was automatically generated.


// Regions Recovery based on high storeFileRefCount threshold value
public static final String STORE_FILE_REF_COUNT_THRESHOLD =
"hbase.regions.recovery.store.file.count";
Contributor:

You missed "ref" in the config name. It's not a store file count threshold but a ref count threshold.

</description>
</property>
<property>
<name>hbase.regions.recovery.store.file.count</name>
Contributor:

Yeah, change it here too.

choreService.cancelChore(this.replicationBarrierCleaner);
choreService.cancelChore(this.snapshotCleanerChore);
choreService.cancelChore(this.hbckChore);
choreService.cancelChore(this.regionsRecoveryChore);
Contributor:

If the chore was not scheduled, this cancel call won't cause any issue, right? Just confirming.

Contributor Author:

Yes, it won't cause any issues; I confirmed it.

</property>
<property>
<name>hbase.regions.recovery.store.file.count</name>
<value>-1</value>
Contributor:

We should allow users to change this without restarting the HMaster, but in a follow-up issue.

Contributor Author:

You mean similar to the _switch command?

Contributor:

No, via the ConfigurationObserver mechanism, allowing changes to take effect without a restart of the process.

Contributor Author:

Sure, looking forward to it in a follow-up Jira.
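The hot-reload approach mentioned above can be sketched roughly as follows. The onConfigurationChange callback name follows HBase's ConfigurationObserver interface, but everything else here (the chore class, the Map standing in for a Configuration object, and the config key, which uses the corrected "ref" naming suggested in review) is a simplified, hypothetical stand-in.

```java
import java.util.HashMap;
import java.util.Map;

public class HotReloadSketch {
    // Simplified stand-in for org.apache.hadoop.hbase.conf.ConfigurationObserver;
    // a Map replaces Hadoop's Configuration for the sake of a runnable sketch.
    interface ConfigurationObserver {
        void onConfigurationChange(Map<String, String> conf);
    }

    // Sketch of a chore that re-reads its threshold when config changes,
    // so operators would not need to restart the master to tune it.
    static final class RegionsRecoveryChoreSketch implements ConfigurationObserver {
        static final String KEY = "hbase.regions.recovery.store.file.ref.count"; // hypothetical name
        private volatile int threshold = -1; // -1: disabled by default

        @Override
        public void onConfigurationChange(Map<String, String> conf) {
            threshold = Integer.parseInt(conf.getOrDefault(KEY, "-1"));
        }

        int getThreshold() { return threshold; }
    }

    public static void main(String[] args) {
        RegionsRecoveryChoreSketch chore = new RegionsRecoveryChoreSketch();
        Map<String, String> conf = new HashMap<>();
        conf.put(RegionsRecoveryChoreSketch.KEY, "256");
        chore.onConfigurationChange(conf); // simulate an online config update
        System.out.println(chore.getThreshold()); // threshold updated, no restart needed
    }
}
```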

// count of all store files
final int maxStoreFileRefCount = regionMetrics.getMaxStoreFileRefCount();
// ignore store file ref count threshold <= 0 (default is -1 i.e. disabled)
if (storeFileRefCountThreshold > 0 && maxStoreFileRefCount > storeFileRefCountThreshold) {
Contributor:

The storeFileRefCountThreshold > 0 check is redundant here and can be avoided?
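For context on why the guard exists given the disabled default of -1 noted in the code comment above: without the > 0 check, every region's maxStoreFileRefCount (always >= 0) would exceed a -1 threshold. A tiny self-contained sketch of the combined condition (illustrative values only, not the chore's real code):

```java
public class ThresholdGuard {
    // Mirrors the condition under discussion: act only when the feature is
    // enabled (threshold > 0) AND the region exceeds the threshold.
    static boolean shouldReopen(int maxStoreFileRefCount, int threshold) {
        return threshold > 0 && maxStoreFileRefCount > threshold;
    }

    public static void main(String[] args) {
        // With the disabled default (-1), dropping the guard would make
        // 0 > -1 evaluate to true and reopen every region.
        System.out.println(shouldReopen(0, -1));    // false: feature disabled
        System.out.println(shouldReopen(500, 128)); // true: possible ref leak
        System.out.println(shouldReopen(100, 128)); // false: under threshold
    }
}
```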


// Specify specific regions of a table to reopen.
// if specified null, all regions of the table will be reopened.
private final List<byte[]> regionNamesList;
Contributor:

Please avoid List, Map, etc. in the variable name if possible.

}
regions =
env.getAssignmentManager().getRegionStates().getRegionsOfTableForReopen(tableName);
List<HRegionLocation> tableRegionsForReopen = env.getAssignmentManager()
Contributor:

This will be all the regions of this table, right? A better name for the var would be tableRegions.

env.getAssignmentManager().getRegionStates().getRegionsOfTableForReopen(tableName);
List<HRegionLocation> tableRegionsForReopen = env.getAssignmentManager()
.getRegionStates().getRegionsOfTableForReopen(tableName);
regions = prepareRegionsForReopen(tableRegionsForReopen);
Contributor:

The method name is not self-explanatory. What "prepare" is it doing? This basically gets the region locations for the mentioned region names, right?

Contributor:

You can even pass the regionNamesList here as well.



[[hbase.regions.recovery.store.file.count]]
*`hbase.regions.recovery.store.file.count`*::
Contributor:

Here also the config name should be corrected.

+
.Description

Store files Ref Count threshold value considered
Contributor:

We can give a bit more detail here. A very large ref count on a file indicates a ref leak on that object. Such files cannot get removed even after they are invalidated via compaction or the like. The only way to get out of such a situation is to reopen the region.

@virajjasani
Contributor Author

Updated the PR based on the latest comments. Please review, @anoopsjohn.

@Apache-HBase

💔 -1 overall

Vote Subsystem Runtime Comment
💙 reexec 0m 44s Docker mode activated.
_ Prechecks _
💚 dupname 0m 0s No case conflicting files found.
💙 prototool 0m 0s prototool was not available.
💚 hbaseanti 0m 0s Patch does not have any anti-patterns.
💚 @author 0m 0s The patch does not contain any @author tags.
💚 test4tests 0m 0s The patch appears to include 3 new or modified test files.
_ master Compile Tests _
💙 mvndep 0m 36s Maven dependency ordering for branch
💚 mvninstall 5m 51s master passed
💚 compile 3m 22s master passed
💚 checkstyle 3m 11s master passed
💙 refguide 6m 25s branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.
💚 shadedjars 5m 30s branch has no errors when building our shaded downstream artifacts.
💚 javadoc 5m 50s master passed
💙 spotbugs 3m 59s Used deprecated FindBugs config; considering switching to SpotBugs.
💚 findbugs 27m 30s master passed
_ Patch Compile Tests _
💙 mvndep 0m 16s Maven dependency ordering for patch
💚 mvninstall 6m 2s the patch passed
💚 compile 3m 36s the patch passed
💚 cc 3m 36s the patch passed
💚 javac 3m 36s the patch passed
💚 checkstyle 3m 8s the patch passed
💚 whitespace 0m 0s The patch has no whitespace issues.
💚 xml 0m 1s The patch has no ill-formed XML file.
💙 refguide 6m 56s patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.
💚 shadedjars 4m 44s patch has no errors when building our shaded downstream artifacts.
💚 hadoopcheck 16m 11s Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
💚 hbaseprotoc 8m 55s the patch passed
💚 javadoc 5m 17s the patch passed
💚 findbugs 25m 13s the patch passed
_ Other Tests _
💔 unit 211m 3s root in the patch failed.
💚 asflicense 4m 58s The patch does not generate ASF License warnings.
364m 16s
Reason Tests
Failed junit tests hadoop.hbase.snapshot.TestExportSnapshotNoCluster
Subsystem Report/Notes
Docker Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/9/artifact/out/Dockerfile
GITHUB PR #600
Optional Tests dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile refguide xml cc hbaseprotoc prototool
uname Linux cbfe87e8fc91 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 GNU/Linux
Build tool maven
Personality /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-600/out/precommit/personality/provided.sh
git revision master / ca0d9f3
Default Java 1.8.0_181
refguide https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/9/artifact/out/branch-site/book.html
refguide https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/9/artifact/out/patch-site/book.html
unit https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/9/artifact/out/patch-unit-root.txt
Test Results https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/9/testReport/
Max. process+thread count 4660 (vs. ulimit of 10000)
modules C: hbase-protocol-shaded hbase-common hbase-hadoop-compat hbase-hadoop2-compat hbase-protocol hbase-client hbase-server . U: .
Console output https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/9/console
versions git=2.11.0 maven=2018-06-17T18:33:14Z findbugs=3.1.11
Powered by Apache Yetus 0.11.0 https://yetus.apache.org

This message was automatically generated.

@Apache-HBase

💔 -1 overall

Vote Subsystem Runtime Comment
💙 reexec 0m 31s Docker mode activated.
_ Prechecks _
💚 dupname 0m 0s No case conflicting files found.
💙 prototool 0m 0s prototool was not available.
💚 hbaseanti 0m 0s Patch does not have any anti-patterns.
💚 @author 0m 0s The patch does not contain any @author tags.
💚 test4tests 0m 0s The patch appears to include 3 new or modified test files.
_ master Compile Tests _
💙 mvndep 0m 34s Maven dependency ordering for branch
💚 mvninstall 6m 8s master passed
💚 compile 3m 14s master passed
💚 checkstyle 2m 58s master passed
💙 refguide 6m 10s branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.
💚 shadedjars 5m 2s branch has no errors when building our shaded downstream artifacts.
💚 javadoc 4m 56s master passed
💙 spotbugs 3m 59s Used deprecated FindBugs config; considering switching to SpotBugs.
💚 findbugs 25m 38s master passed
_ Patch Compile Tests _
💙 mvndep 0m 14s Maven dependency ordering for patch
💔 mvninstall 2m 48s root in the patch failed.
💔 compile 1m 53s root in the patch failed.
💔 cc 1m 53s root in the patch failed.
💔 javac 1m 53s root in the patch failed.
💚 checkstyle 2m 58s the patch passed
💚 whitespace 0m 0s The patch has no whitespace issues.
💚 xml 0m 1s The patch has no ill-formed XML file.
💙 refguide 6m 19s patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.
💔 shadedjars 3m 58s patch has 14 errors when building our shaded downstream artifacts.
💔 hadoopcheck 2m 5s The patch causes 14 errors with Hadoop v2.8.5.
💔 hadoopcheck 4m 17s The patch causes 14 errors with Hadoop v2.9.2.
💔 hadoopcheck 6m 30s The patch causes 14 errors with Hadoop v3.1.2.
💔 hbaseprotoc 0m 44s hbase-server in the patch failed.
💔 hbaseprotoc 2m 17s root in the patch failed.
💚 javadoc 4m 47s the patch passed
💔 findbugs 0m 56s hbase-server in the patch failed.
💔 findbugs 8m 46s root in the patch failed.
_ Other Tests _
💔 unit 12m 11s root in the patch failed.
💚 asflicense 1m 22s The patch does not generate ASF License warnings.
124m 25s
Subsystem Report/Notes
Docker Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/artifact/out/Dockerfile
GITHUB PR #600
Optional Tests dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile refguide xml cc hbaseprotoc prototool
uname Linux fc85093077d9 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 GNU/Linux
Build tool maven
Personality /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-600/out/precommit/personality/provided.sh
git revision master / 944108c
Default Java 1.8.0_181
refguide https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/artifact/out/branch-site/book.html
mvninstall https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/artifact/out/patch-mvninstall-root.txt
compile https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/artifact/out/patch-compile-root.txt
cc https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/artifact/out/patch-compile-root.txt
javac https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/artifact/out/patch-compile-root.txt
refguide https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/artifact/out/patch-site/book.html
shadedjars https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/artifact/out/patch-shadedjars.txt
hadoopcheck https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/artifact/out/patch-javac-2.8.5.txt
hadoopcheck https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/artifact/out/patch-javac-2.9.2.txt
hadoopcheck https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/artifact/out/patch-javac-3.1.2.txt
hbaseprotoc https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/artifact/out/patch-hbaseprotoc-hbase-server.txt
hbaseprotoc https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/artifact/out/patch-hbaseprotoc-root.txt
findbugs https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/artifact/out/patch-findbugs-hbase-server.txt
findbugs https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/artifact/out/patch-findbugs-root.txt
unit https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/artifact/out/patch-unit-root.txt
Test Results https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/testReport/
Max. process+thread count 289 (vs. ulimit of 10000)
modules C: hbase-protocol-shaded hbase-common hbase-hadoop-compat hbase-hadoop2-compat hbase-protocol hbase-client hbase-server . U: .
Console output https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/10/console
versions git=2.11.0 maven=2018-06-17T18:33:14Z findbugs=3.1.11
Powered by Apache Yetus 0.11.0 https://yetus.apache.org

This message was automatically generated.

@Apache-HBase

💔 -1 overall

Vote Subsystem Runtime Comment
💙 reexec 0m 31s Docker mode activated.
_ Prechecks _
💚 dupname 0m 1s No case conflicting files found.
💙 prototool 0m 0s prototool was not available.
💚 hbaseanti 0m 0s Patch does not have any anti-patterns.
💚 @author 0m 0s The patch does not contain any @author tags.
💚 test4tests 0m 0s The patch appears to include 3 new or modified test files.
_ master Compile Tests _
💙 mvndep 0m 34s Maven dependency ordering for branch
💚 mvninstall 5m 43s master passed
💚 compile 3m 16s master passed
💚 checkstyle 2m 57s master passed
💙 refguide 6m 12s branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.
💚 shadedjars 5m 2s branch has no errors when building our shaded downstream artifacts.
💚 javadoc 4m 54s master passed
💙 spotbugs 3m 48s Used deprecated FindBugs config; considering switching to SpotBugs.
💚 findbugs 26m 26s master passed
_ Patch Compile Tests _
💙 mvndep 0m 15s Maven dependency ordering for patch
💚 mvninstall 5m 26s the patch passed
💚 compile 3m 19s the patch passed
💚 cc 3m 19s the patch passed
💚 javac 3m 19s the patch passed
💚 checkstyle 2m 57s the patch passed
💚 whitespace 0m 0s The patch has no whitespace issues.
💚 xml 0m 2s The patch has no ill-formed XML file.
💙 refguide 6m 21s patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.
💚 shadedjars 5m 0s patch has no errors when building our shaded downstream artifacts.
💚 hadoopcheck 17m 4s Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
💚 hbaseprotoc 9m 7s the patch passed
💚 javadoc 5m 8s the patch passed
💚 findbugs 27m 7s the patch passed
_ Other Tests _
💔 unit 213m 19s root in the patch failed.
💚 asflicense 3m 34s The patch does not generate ASF License warnings.
362m 58s
Reason Tests
Failed junit tests hadoop.hbase.snapshot.TestExportSnapshotNoCluster
Subsystem Report/Notes
Docker Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/11/artifact/out/Dockerfile
GITHUB PR #600
Optional Tests dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile refguide xml cc hbaseprotoc prototool
uname Linux d8739debc6af 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 GNU/Linux
Build tool maven
Personality /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-600/out/precommit/personality/provided.sh
git revision master / 5217618
Default Java 1.8.0_181
refguide https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/11/artifact/out/branch-site/book.html
refguide https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/11/artifact/out/patch-site/book.html
unit https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/11/artifact/out/patch-unit-root.txt
Test Results https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/11/testReport/
Max. process+thread count 4953 (vs. ulimit of 10000)
modules C: hbase-protocol-shaded hbase-common hbase-hadoop-compat hbase-hadoop2-compat hbase-protocol hbase-client hbase-server . U: .
Console output https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/11/console
versions git=2.11.0 maven=2018-06-17T18:33:14Z findbugs=3.1.11
Powered by Apache Yetus 0.11.0 https://yetus.apache.org

This message was automatically generated.

@virajjasani
Contributor Author

virajjasani commented Oct 3, 2019

@apurtell @anoopsjohn The status on this so far:

  1. Based on Anoop's suggestion, I will create a follow-up Jira for turning this feature on/off via ConfigurationObserver (without an HM restart) once this is ready for commit.
  2. Addressed recent review comments. Ready for review.

Thanks
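The ConfigurationObserver-based toggle from point 1 could look roughly like the sketch below. This is only an illustration: the `ConfigurationObserver` interface here is a simplified stand-in for `org.apache.hadoop.hbase.conf.ConfigurationObserver` (which takes an HBase `Configuration` rather than a `Map`), and the config key and class name are assumed for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for org.apache.hadoop.hbase.conf.ConfigurationObserver.
interface ConfigurationObserver {
    void onConfigurationChange(Map<String, String> conf);
}

// Hypothetical chore that re-reads its threshold on an online config reload,
// so the feature can be enabled/disabled without restarting the HMaster.
class StoreFileRefCountChore implements ConfigurationObserver {
    static final String REF_COUNT_KEY = "hbase.regions.recovery.store.file.ref.count"; // assumed key

    // A non-positive threshold disables the feature.
    private volatile int storeFileRefCountThreshold = -1;

    @Override
    public void onConfigurationChange(Map<String, String> conf) {
        // Pick up the new threshold from the reloaded configuration.
        storeFileRefCountThreshold =
            Integer.parseInt(conf.getOrDefault(REF_COUNT_KEY, "-1"));
    }

    boolean isEnabled() {
        return storeFileRefCountThreshold > 0;
    }

    public static void main(String[] args) {
        StoreFileRefCountChore chore = new StoreFileRefCountChore();
        System.out.println(chore.isEnabled()); // prints false: feature off by default

        Map<String, String> conf = new HashMap<>();
        conf.put(REF_COUNT_KEY, "300");
        chore.onConfigurationChange(conf);     // simulate an online config reload
        System.out.println(chore.isEnabled()); // prints true after the reload
    }
}
```

In the real HMaster the observer would be registered with the configuration manager so that a reload notification triggers `onConfigurationChange`; the sketch only shows the toggle mechanics.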

Contributor

@anoopsjohn anoopsjohn left a comment

Looks good

@virajjasani
Contributor Author

@apurtell Please review at your convenience

@Apache-HBase

💔 -1 overall

Vote Subsystem Runtime Comment
💙 reexec 0m 29s Docker mode activated.
_ Prechecks _
💚 dupname 0m 0s No case conflicting files found.
💙 prototool 0m 0s prototool was not available.
💚 hbaseanti 0m 0s Patch does not have any anti-patterns.
💚 @author 0m 0s The patch does not contain any @author tags.
💚 test4tests 0m 0s The patch appears to include 3 new or modified test files.
_ master Compile Tests _
💙 mvndep 1m 2s Maven dependency ordering for branch
💚 mvninstall 5m 40s master passed
💚 compile 3m 18s master passed
💚 checkstyle 3m 6s master passed
💙 refguide 6m 19s branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.
💚 shadedjars 5m 0s branch has no errors when building our shaded downstream artifacts.
💚 javadoc 4m 56s master passed
💙 spotbugs 3m 55s Used deprecated FindBugs config; considering switching to SpotBugs.
💚 findbugs 25m 43s master passed
_ Patch Compile Tests _
💙 mvndep 0m 14s Maven dependency ordering for patch
💚 mvninstall 5m 25s the patch passed
💚 compile 3m 15s the patch passed
💚 cc 3m 15s the patch passed
💚 javac 3m 15s the patch passed
💚 checkstyle 3m 3s the patch passed
💚 whitespace 0m 0s The patch has no whitespace issues.
💚 xml 0m 1s The patch has no ill-formed XML file.
💙 refguide 6m 13s patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect.
💚 shadedjars 5m 1s patch has no errors when building our shaded downstream artifacts.
💚 hadoopcheck 18m 16s Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2.
💚 hbaseprotoc 9m 45s the patch passed
💚 javadoc 4m 57s the patch passed
💚 findbugs 27m 6s the patch passed
_ Other Tests _
💔 unit 175m 4s root in the patch failed.
💚 asflicense 3m 6s The patch does not generate ASF License warnings.
326m 19s
Reason Tests
Failed junit tests hadoop.hbase.master.TestClusterRestartFailoverSplitWithoutZk
Subsystem Report/Notes
Docker Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/12/artifact/out/Dockerfile
GITHUB PR #600
Optional Tests dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile refguide xml cc hbaseprotoc prototool
uname Linux 2c13845cae63 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 GNU/Linux
Build tool maven
Personality /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-600/out/precommit/personality/provided.sh
git revision master / 2ad62b0
Default Java 1.8.0_181
refguide https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/12/artifact/out/branch-site/book.html
refguide https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/12/artifact/out/patch-site/book.html
unit https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/12/artifact/out/patch-unit-root.txt
Test Results https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/12/testReport/
Max. process+thread count 4622 (vs. ulimit of 10000)
modules C: hbase-protocol-shaded hbase-common hbase-hadoop-compat hbase-hadoop2-compat hbase-protocol hbase-client hbase-server . U: .
Console output https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-600/12/console
versions git=2.11.0 maven=2018-06-17T18:33:14Z findbugs=3.1.11
Powered by Apache Yetus 0.11.0 https://yetus.apache.org

This message was automatically generated.

@anoopsjohn anoopsjohn merged commit 14dcf1d into apache:master Oct 23, 2019